library(tidyverse) # for data cleaning and plotting
library(gardenR) # for Lisa's garden data
library(lubridate) # for date manipulation
library(openintro) # for the abbr2state() function
library(palmerpenguins) # for Palmer penguin data
library(maps) # for map data
library(ggmap) # for mapping points on maps
library(gplots) # for col2hex() function
library(RColorBrewer) # for color palettes
library(sf) # for working with spatial data
library(leaflet) # for highly customizable mapping
library(ggthemes) # for more themes (including theme_map())
library(plotly) # for the ggplotly() - basic interactivity
library(gganimate) # for adding animation layers to ggplots
library(transformr) # for "tweening" (gganimate)
library(gifski) # need the library for creating gifs but don't need to load each time
library(shiny) # for creating interactive apps
library(janitor) # for cleaning variable names
theme_set(theme_minimal())
# SNCF Train data
small_trains <- read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-02-26/small_trains.csv")
# Lisa's garden data
data("garden_harvest")
# Lisa's Mallorca cycling data
mallorca_bike_day7 <- read_csv("https://www.dropbox.com/s/zc6jan4ltmjtvy0/mallorca_bike_day7.csv?dl=1") %>%
select(1:4, speed)
# Heather Lendway's Ironman 70.3 Pan Am championships Panama data
panama_swim <- read_csv("https://raw.githubusercontent.com/llendway/gps-data/master/data/panama_swim_20160131.csv")
panama_bike <- read_csv("https://raw.githubusercontent.com/llendway/gps-data/master/data/panama_bike_20160131.csv")
panama_run <- read_csv("https://raw.githubusercontent.com/llendway/gps-data/master/data/panama_run_20160131.csv")
#COVID-19 data from the New York Times
covid19 <- read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv")
Go here or to previous homework to remind yourself how to get set up.
Once your repository is created, always open your project rather than just opening an .Rmd file. You can do that by clicking the .Rproj file in your repository folder on your computer, or by going to the upper right-hand corner of RStudio and clicking the arrow next to Project: (None); your project will appear in that list if you've used it recently. You could also go to File –> Open Project and navigate to your .Rproj file.
Put your name at the top of the document.
For ALL graphs, you should include appropriate labels.
Feel free to change the default theme, which I currently have set to theme_minimal().
Use good coding practice. Read the short sections on good code with pipes and ggplot2. This is part of your grade!
NEW!! With animated graphs, add eval=FALSE to the code chunk that creates the animation and saves it using anim_save(). Add another code chunk to reread the gif back into the file. See the tutorial for help.
When you are finished with ALL the exercises, uncomment the options at the top so your document looks nicer. Don’t do it before then, or else you might miss some important warnings and messages.
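The eval=FALSE workflow above can be sketched as two chunks; the chunk names, the `my_animated_plot` object, and the file name are all illustrative placeholders:

```
# ---- chunk 1, header `{r make-anim, eval=FALSE}` ----
# build and save the animation once; eval=FALSE keeps knitting fast
anim_save("example.gif", animate(my_animated_plot))

# ---- chunk 2, header `{r show-anim, echo=FALSE}` ----
# a second chunk rereads the saved gif into the knitted document
knitr::include_graphics("example.gif")
```

The point of the split is that the expensive rendering happens only when you run chunk 1 by hand, while every knit just reads the gif off disk.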
You can make a ggplot interactive with the ggplotly() function.

clean_garden <- garden_harvest %>%
filter(vegetable %in% c('tomatoes',
'lettuce',
'beans',
'zucchini',
'cucumbers',
'peas',
'spinach',
'carrots')) %>%
group_by(vegetable, date) %>%
summarize(sum_weight = sum(weight)) %>%
ungroup() %>%
mutate(vegetable = fct_reorder(vegetable, sum_weight, sum, .desc = TRUE))
clean_garden %>%
ggplot(aes(x = date, y = sum_weight, color = vegetable))+
geom_line()+
theme(legend.position = 'none',
text = element_text(size = 10))+
geom_text(aes(label = vegetable))+
labs(title = 'Vegetables with largest harvest in grams during the fall',
x = '',
y = '',
subtitle = "Date: {frame_along}")+
scale_color_manual(values = c('tomatoes'='red',
'zucchini' = 'darkseagreen',
'cucumbers' = 'green',
'beans' = 'burlywood',
'peas' = 'green2',
'carrots' = 'darkorange',
'lettuce' = 'chartreuse',
'spinach' = 'darkgreen'))+
transition_reveal(date)
anim_save("veggies.gif")
knitr::include_graphics("veggies.gif")
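The fct_reorder() call above sorts vegetables by their total harvest weight. A minimal, self-contained sketch of that pattern, on toy data rather than the garden data:

```r
library(forcats)

f <- factor(c("a", "b", "a", "c"))
x <- c(1, 10, 2, 5)

# reorder the levels by the sum of x within each level, largest first
f2 <- fct_reorder(f, x, .fun = sum, .desc = TRUE)
levels(f2)
```

The levels come out as b (10), c (5), a (3); in the plot above, that is what puts the most-harvested vegetables first.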
income_aggregate <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2021/2021-02-09/income_aggregate.csv')
income_aggregate %>%
filter(income_quintile %in% c("Top 5%","Highest","Lowest"),
race %in% c("Hispanic","Black Alone", "White Alone", "Asian Alone"),
year %in% c(2009:2020)) %>%
ggplot(aes(x = year, y = income_share, color = income_quintile))+
geom_line(size = 1)+
labs(title = "Income share for different quintiles across time and races",
x = "",
y = "Income share",
color = "Income quintile",
subtitle = "Race: {closest_state}",
caption = "Franco Salinas")+
theme(panel.background = element_rect(fill = "gray"),
axis.text.x = element_blank())+
transition_states(race)
anim_save("races.gif")
knitr::include_graphics("races.gif")
This exercise uses the small_trains dataset, which contains data from the SNCF (National Society of French Railways). These are Tidy Tuesday data! Read more about it here.

small_trains %>%
ggplot(aes(y = journey_time_avg,
x = year, color = departure_station)) +
geom_point() +
labs(title = "Journey average time (minutes) across French stations",
subtitle = "Departure station: {closest_state}",
x = "",
y = "")+
transition_states(departure_station, transition_length = 2, state_length = 3)+
theme(legend.position = "none")
anim_save("smalltrains.gif")
knitr::include_graphics("smalltrains.gif")
In this exercise you will use geom_area() (examples here). You will look at cumulative harvest of tomato varieties over time. You should do the following: starting with the garden_harvest data, filter the data to the tomatoes and find the daily harvest in pounds for each variety; then order the varieties (with fct_reorder()) from most to least harvested (most on the bottom). I have started the code for you below. The complete() function creates a row for all unique date/variety combinations. If a variety is not harvested on one of the harvest dates in the dataset, it is filled with a value of 0.
garden_harvest %>%
filter(vegetable == "tomatoes") %>%
group_by(date, variety) %>%
summarize(daily_harvest_lb = sum(weight)*0.00220462) %>%
ungroup() %>%
complete(variety, date, fill = list(daily_harvest_lb = 0)) %>%
group_by(variety) %>%
mutate(cumsum = cumsum(daily_harvest_lb)) %>%
ggplot(aes(y = cumsum, x = date,
fill = fct_reorder(variety, daily_harvest_lb))) +
geom_area(position = "stack")+
labs(fill = "Variety",
title = "Cumulative pounds of tomato varieties harvested over time",
x = "",
y = "")+
transition_reveal(date)
anim_save("tomatoes.gif")
knitr::include_graphics("tomatoes.gif")
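The complete() behavior described above can be checked on a toy data frame (the variety and column names here are made up):

```r
library(tidyr)

df <- data.frame(
  variety = c("x", "y"),
  date = as.Date(c("2020-06-01", "2020-06-02")),
  lb = c(1, 2)
)

# every variety/date combination gets a row; absent combinations get lb = 0
full <- complete(df, variety, date, fill = list(lb = 0))
full
```

The result has 2 varieties x 2 dates = 4 rows, with lb = 0 for the two combinations that were missing from the input.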
Map my mallorca_bike_day7 bike ride using animation! Requirements: use a background map from ggmap; optionally, use the ggimage package and geom_image to add a bike image instead of a red point. You can use this image. See here for an example.

mallorca_map <- get_stamenmap(
bbox = c(left = 2.28 , bottom = 39.41, right = 3.03 , top = 39.8),
maptype = "terrain",
zoom = 10
)
ggmap(mallorca_map) +
geom_point(data = mallorca_bike_day7,
aes(x = lon, y = lat),
size = 1, color = "red")+
geom_path(data = mallorca_bike_day7,
aes(x = lon, y = lat, color = ele),
size = .5) +
scale_color_viridis_c(option = "magma") +
theme_map() +
theme(legend.background = element_blank())+
labs(title = "Mallorca ride",
subtitle = "Time: {frame_along}",
color = "Elevation")+
annotate(geom = "text",
x = 2.586255,
y = 39.66033,
label = "Start",
color = "Blue")+
transition_reveal(time)
anim_save("mallorca.gif")
knitr::include_graphics("mallorca.gif")
I prefer this to the static map because I can see the route followed in the correct direction and progression.
This exercise uses the panama_swim, panama_bike, and panama_run datasets. Create a similar map to the one you created with my cycling data. You will need to make some small changes: first, combine the three datasets with bind_rows().

bind <- bind_rows(panama_swim, panama_bike, panama_run)
bind
panama_map <- get_stamenmap(
bbox = c(left =-79.6311 , bottom = 8.8942, right = -79.4440 , top = 8.9915),
maptype = "terrain",
zoom = 12
)
ggmap(panama_map) +
geom_point(data = bind,
aes(x = lon, y = lat, color = event),
size = 3)+
geom_path(data = bind,
aes(x = lon, y = lat),
size = .5) +
theme_map() +
theme(legend.background = element_blank())+
labs(title = "Panama",
subtitle = "Time: {frame_along}")+
transition_reveal(time)
anim_save("panama.gif")
knitr::include_graphics("panama.gif")
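bind_rows(), used above to stack the three events, matches columns by name and fills missing ones with NA. A small sketch with made-up columns:

```r
library(dplyr)

swim <- data.frame(lon = 1, lat = 2, event = "swim")
bike <- data.frame(lon = 3, lat = 4, event = "bike", speed = 20)

# columns are matched by name; swim has no speed column, so it becomes NA
combined <- bind_rows(swim, bike)
combined
```

This is why the combined triathlon data keeps every column that appears in any of the three files.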
Compute the new cases in the past week with the lag() function (which you've used in a previous set of exercises). Replace missing values with 0's using replace_na(). Draw the trajectory with geom_path() and add a group aesthetic. Format the axis labels; scales::comma is one option. This plot will look pretty ugly as is. Mark the current position (geom_point()) and add the state name as a label (geom_text() - you should look at the check_overlap argument). Use the animate() function to have 200 frames in your animation and make it 30 seconds long.

cumcases <- covid19 %>%
group_by(state) %>% # the 7-day lag must be computed within each state
mutate(cumcases = cases, # the NYT cases column is already cumulative
sevenlag = lag(cumcases, n = 7, order_by = date)) %>%
ungroup() %>%
replace_na(list(sevenlag = 0)) %>%
mutate(newcases = cumcases - sevenlag) %>%
filter(cumcases > 20)
cases_animate <- cumcases %>%
ggplot(aes(x = cumcases, y = newcases, group = state)) +
scale_y_log10(labels = scales::comma)+
scale_x_log10(labels = scales::comma)+
geom_path()+
geom_point(aes(color = state))+
geom_text(aes(label = state), check_overlap = TRUE, color = "pink")+
theme(legend.position = "none")+
labs(title = "Relation of new cases and cumulative sum of cases",
y = "New cases",
x = "Cumulative sum of cases")+
transition_reveal(date)
animate(cases_animate, nframes = 200, duration = 30)
anim_save("covid.gif")
knitr::include_graphics("covid.gif")
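The lag() trick above (new cases = cumulative cases minus the cumulative count seven rows earlier) can be illustrated on a short vector; the window of 2 here stands in for the 7-day lag:

```r
library(dplyr)

cumulative <- c(10, 20, 35, 50)

# shift the series down by 2 positions; the first 2 entries become NA
lagged <- lag(cumulative, n = 2)

# new cases in the window, treating the leading NAs as 0
# (the same thing replace_na() does in the code above)
new_in_window <- cumulative - coalesce(lagged, 0)
new_in_window
```

This gives 10, 20, 25, 30: the early entries fall back to the full cumulative count, and later entries are the growth over the window.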
I observe that New York was the state with the highest number of new cases per unit increase in cumulative cases; however, Florida took over in the second half of the time frame.
This exercise uses the states_map data. Here is a list of details you should include in the plot: use wday() to create a day of week variable and filter to all the Fridays; use the animate() function to make the animation 200 frames instead of the default 100 and to pause for 10 frames on the end frame; and add group = date in aes().

census_pop_est_2018 <- read_csv("https://www.dropbox.com/s/6txwv3b4ng7pepe/us_census_2018_state_pop_est.csv?dl=1") %>%
separate(state, into = c("dot","state"), extra = "merge") %>%
select(-dot) %>%
mutate(state = str_to_lower(state))
states_map <- map_data("state")
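The separate() call above splits a leading junk token off the state name; a toy check of that pattern (the leading-dot format is an assumption about the raw census file):

```r
library(tidyr)

df <- data.frame(state = c(".Alabama", ".New York"))

# separate() splits on runs of non-alphanumeric characters by default;
# extra = "merge" keeps multi-word state names in one piece
out <- separate(df, state, into = c("dot", "state"), extra = "merge")
out$state
```

Without extra = "merge", "New York" would be split across the two columns and "York" would be dropped with a warning.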
cumulative_10k <- covid19 %>%
mutate(state_name = str_to_lower(state)) %>%
left_join(census_pop_est_2018,
by = c("state_name" = "state")) %>%
group_by(state) %>%
mutate(cases_per_10000 = (cases/est_pop_2018)*10000,
week_day = wday(date, label = TRUE)) %>%
filter(week_day == "Fri") # wday() numbers days from Sunday, so labels are safer
cases_10k_anim <- cumulative_10k %>%
ggplot() +
geom_map(map = states_map,
aes(map_id = state_name,
fill = cases_per_10000,
group = date))+
expand_limits(x = states_map$long, y = states_map$lat) +
labs(title = "States' COVID cumulative cases per 10,000 people",
subtitle = "Date: {closest_state}",
fill = "Cases/10000")+
theme_map()+
theme(legend.background = element_blank(),
legend.position = "bottom")+
transition_states(date)
animate(cases_10k_anim, nframes = 200, end_pause = 10)
anim_save("covid_10k.gif")
knitr::include_graphics("covid_10k.gif")
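A quick check of wday(), which numbers days of the week starting from Sunday by default; this is why filtering on day labels is less error-prone than filtering on numbers:

```r
library(lubridate)

d <- as.Date("2021-04-02") # this date was a Friday

wday(d)               # numeric day of week; weeks start on Sunday, so Friday is 6
wday(d, label = TRUE) # the abbreviated day name, "Fri"
```

If you do want numbers, wday() also accepts a week_start argument (week_start = 1 makes Monday day 1).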
The number of cases per 10,000 people seems to stop growing earlier in both Oregon and Washington, while it grows faster in North and South Dakota after October 2020.
Shiny app (for next week!) NOT DUE THIS WEEK! If any of you want to work ahead, this will be on next week's exercises.

The app will live in an app.R file you create. Below, you will post a link to the app that you publish on shinyapps.io. You will create an app to compare states' cumulative number of COVID cases over time. The x-axis will be the number of days since 20+ cases and the y-axis will be cumulative cases on the log scale (scale_y_log10()). We use the number of days since 20+ cases on the x-axis so we can make better comparisons of the curve trajectories. You will have an input box where the user can choose which states to compare (selectInput()) and a submit button to click once the user has chosen all the states they're interested in comparing. The graph should display a different line for each state, with labels either on the graph or in a legend. Color can be used if needed.

DID YOU REMEMBER TO UNCOMMENT THE OPTIONS AT THE TOP?